
    First International Diagnosis Competition - DXC'09

    A framework to compare and evaluate diagnosis algorithms (DAs) has been created jointly by NASA Ames Research Center and PARC. In this paper, we present the first concrete implementation of this framework as a competition called DXC'09. The goal of this competition was to evaluate and compare DAs on a common platform and to determine a winner based on diagnosis results. Twelve DAs (model-based and otherwise) competed in this first year of the competition, in three tracks that included industrial and synthetic systems. Specifically, the participants provided algorithms that communicated with the run-time architecture to receive scenario data and return diagnostic results. These algorithms were run on extended scenario data sets (different from the sample set) to compute a set of pre-defined metrics. A ranking scheme based on weighted metrics was used to declare winners. This paper presents the systems used in DXC'09, a description of the faults and data sets, a listing of the participating DAs, the metrics and results computed from running the DAs, and a brief analysis of the results.
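The abstract above mentions a ranking scheme based on weighted metrics. A minimal sketch of how such a scheme might work is shown below; the metric names, weights, and scores are illustrative assumptions, not the actual DXC'09 metrics or results.

```python
def rank_algorithms(scores, weights):
    """Rank diagnosis algorithms by the weighted sum of their metric scores.

    scores:  {algorithm_name: {metric_name: value}}, higher is better
    weights: {metric_name: weight}
    Returns a list of (algorithm_name, weighted_score), best first.
    """
    ranked = [
        (name, sum(weights[m] * v for m, v in metrics.items()))
        for name, metrics in scores.items()
    ]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

# Hypothetical per-metric scores for two competing DAs.
scores = {
    "DA-1": {"detection_accuracy": 0.9, "isolation_accuracy": 0.7},
    "DA-2": {"detection_accuracy": 0.8, "isolation_accuracy": 0.9},
}
weights = {"detection_accuracy": 0.6, "isolation_accuracy": 0.4}
```

With these assumed weights, `rank_algorithms(scores, weights)` places DA-2 first (weighted score 0.84 versus 0.82).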

    Towards a Framework for Evaluating and Comparing Diagnosis Algorithms

    Diagnostic inference involves the detection of anomalous system behavior and the identification of its cause, possibly down to a failed unit or to a parameter of a failed unit. Traditional approaches to solving this problem include expert/rule-based, model-based, and data-driven methods. Each approach (and the various techniques within each) uses a different representation of the knowledge required to perform the diagnosis. The sensor data is combined with these internal representations to produce the diagnosis result. Despite the availability of various diagnosis technologies, there have been only minimal efforts to develop a standardized software framework to run, evaluate, and compare different diagnosis technologies on the same system. This paper presents a framework that defines a standardized representation of the system knowledge, the sensor data, and the form of the diagnosis results, and provides a run-time architecture that can execute diagnosis algorithms, send sensor data to the algorithms at the appropriate time steps from a variety of sources (including the actual physical system), and collect the resulting diagnoses. We also define a set of metrics that can be used to evaluate and compare the performance of the algorithms, and provide software to calculate those metrics.
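The run-time architecture described above delivers sensor data to an algorithm step by step and collects its diagnoses. A minimal sketch of what the algorithm-facing interface might look like follows; the class and method names are assumptions for illustration, not the framework's actual API.

```python
from abc import ABC, abstractmethod

class DiagnosisAlgorithm(ABC):
    """Interface a participating DA would implement (hypothetical names)."""

    @abstractmethod
    def step(self, timestamp, sensor_data):
        """Consume one time step of sensor readings ({sensor: value}) and
        return the current diagnosis (e.g. a set of suspected faulty
        components), or None if behavior looks nominal."""

class ThresholdDA(DiagnosisAlgorithm):
    """Trivial example DA: flags any sensor exceeding a fixed limit."""

    def __init__(self, limits):
        self.limits = limits  # {sensor_name: max_nominal_value}

    def step(self, timestamp, sensor_data):
        faults = {s for s, v in sensor_data.items()
                  if s in self.limits and v > self.limits[s]}
        return faults or None

def run_scenario(da, scenario):
    """Drive a DA over recorded scenario data: [(t, {sensor: value}), ...].
    Returns the per-step diagnoses, which a harness could score with metrics."""
    return [(t, da.step(t, data)) for t, data in scenario]
```

For example, `run_scenario(ThresholdDA({"pressure": 10.0}), [(0, {"pressure": 5.0}), (1, {"pressure": 12.0})])` reports no fault at the first step and flags `pressure` at the second.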

    Sequential Scheduling of Observations in Diagnosis of Continuous Dynamic Systems

    A fault in a continuous dynamic system is a non-nominal value of a physical parameter. Parameter values can usually be estimated, with some uncertainty, from the values of the system's output variables observed over a time interval. A diagnosis then consists of several probability density functions, one per parameter. Such a diagnosis may be noisy, especially if it is unlikely that the system is affected by several faults at the same time. To reduce the uncertainty of the diagnosis, new observations are needed; however, in some contexts observations are expensive, so there is a maximum number of observations that can be gathered. This paper addresses the problem of choosing the best schedule for a limited number of observations within a given time horizon so as to minimize the uncertainty of the diagnosis. Several scheduling policies are compared experimentally.
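The scheduling problem above can be illustrated with a toy comparison of two policies, scored by the residual posterior variance of a single parameter. This is a sketch under simplifying assumptions (a Gaussian estimate whose precision grows by a per-observation information term), not the paper's actual method or policies.

```python
def posterior_variance(prior_var, chosen_times, information):
    # Gaussian assumption: posterior precision = prior precision
    # plus the information contributed by each chosen observation time.
    precision = 1.0 / prior_var + sum(information[t] for t in chosen_times)
    return 1.0 / precision

def uniform_policy(horizon, budget):
    # Spread the observation budget evenly over the time horizon.
    step = horizon // budget
    return [i * step for i in range(budget)]

def greedy_policy(horizon, budget, information):
    # Spend the budget on the most informative time points.
    return sorted(range(horizon), key=lambda t: information[t],
                  reverse=True)[:budget]

# Illustrative setup: 10-step horizon, 3 observations allowed, and a
# time-varying informativeness profile (assumed for demonstration).
horizon, budget, prior_var = 10, 3, 4.0
information = [0.1 * (t % 4) for t in range(horizon)]
```

Under this model the greedy schedule can never yield a larger posterior variance than the uniform one, since it maximizes the summed information; the experimental question in the paper is how such policies behave on realistic systems.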

    Fault diagnostics using expert systems
